
    Voltage-dependent Block of the Cystic Fibrosis Transmembrane Conductance Regulator Cl- Channel by Two Closely Related Arylaminobenzoates

    The gene defective in cystic fibrosis encodes a Cl- channel, the cystic fibrosis transmembrane conductance regulator (CFTR). CFTR is blocked by diphenylamine-2-carboxylate (DPC) when applied extracellularly at millimolar concentrations. We studied the block of CFTR expressed in Xenopus oocytes by DPC or by a closely related molecule, flufenamic acid (FFA). Block of whole-cell CFTR currents by bath-applied DPC or FFA, both at 200 µM, requires several minutes to reach full effect. Blockade is voltage dependent, suggesting open-channel block: currents at positive potentials are not affected, but currents at negative potentials are reduced. The binding site for both drugs senses ~40% of the electric field across the membrane, measured from the inside. In single-channel recordings from excised patches without blockers, the conductance was 8.0 ± 0.4 pS in symmetric 150 mM Cl-. A subconductance state, measuring ~60% of the main conductance, was often observed. Bursts to the full open state lasting up to tens of seconds were uninterrupted at depolarizing membrane voltages; at hyperpolarizing voltages, bursts were interrupted by brief closures. Either DPC or FFA (50 µM) applied to the cytoplasmic or extracellular face of the channel led to an increase in flicker at V_m = -100 mV but not at V_m = +100 mV, in agreement with the whole-cell experiments. DPC induced a higher frequency of flickers when applied from the cytoplasmic side than from the extracellular side. FFA produced longer closures than DPC; the FFA closed time was roughly equal (~1.2 ms) at -100 mV with application from either side. In cell-attached patch recordings with DPC or FFA applied to the bath, there was flickery block at V_m = -100 mV, confirming that the drugs permeate the membrane to reach the binding site. The data are consistent with a single binding site for both drugs, reached from either end of the channel. Open-channel block by DPC or FFA may offer tools for use with site-directed mutagenesis to describe the permeation pathway.
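
    Voltage-dependent block of this kind is commonly analyzed with the Woodhull formalism: a charged blocker binds at a site an electrical distance delta into the membrane field, so its dissociation constant depends on voltage. The sketch below is that generic model, not the paper's own analysis; the blocker valence and the zero-voltage Kd are illustrative assumptions, while delta = 0.4 (measured from the inside) comes from the abstract.

```python
import numpy as np

# Woodhull-style model of voltage-dependent open-channel block.
# For a blocker of valence z entering from the inside and binding at an
# electrical distance delta (measured from the inside):
#   Kd(V) = Kd(0) * exp(-z * delta * F * V / (Rgas * T))
# Relative unblocked current: I/I0 = 1 / (1 + [B]/Kd(V)).

F = 96485.0    # C/mol, Faraday constant
Rgas = 8.314   # J/(mol*K), gas constant
T = 295.0      # K, approximate room temperature

z = -1         # valence of the anionic blocker (assumed)
delta = 0.4    # fraction of the field sensed, from the inside (abstract)
Kd0 = 1e-3     # M, zero-voltage dissociation constant (illustrative only)
B = 200e-6     # M, blocker concentration from the whole-cell experiments

def unblocked_fraction(V_mV):
    """Relative current I/I0 at membrane potential V (in mV)."""
    V = V_mV * 1e-3
    Kd = Kd0 * np.exp(-z * delta * F * V / (Rgas * T))
    return 1.0 / (1.0 + B / Kd)

for V in (-100, -50, 0, +50, +100):
    print(f"V = {V:+4d} mV   I/I0 = {unblocked_fraction(V):.2f}")
```

    With these parameters the current is nearly unaffected at positive potentials and roughly halved at -100 mV, reproducing the qualitative voltage dependence described in the abstract.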

    Poisson transition rates from time-domain measurements with finite bandwidth

    In time-domain measurements of a Poisson two-level system, the observed transition rates are always smaller than those of the actual system, a general consequence of finite measurement bandwidth in an experiment. This underestimation of the rates is significant even when the measurement and detection apparatus is ten times faster than the process under study. We derive here a quantitative form for this correction using a straightforward state-transition model that includes the detection apparatus, and provide a method for determining a system's actual transition rates from bandwidth-limited measurements. We support our results with computer simulations and experimental data from time-domain measurements of quasiparticle tunneling in a single-Cooper-pair transistor. Comment: 4 pages, 5 figures
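
    The size of the effect is easy to see with a toy Monte Carlo: generate exponential dwell times for a two-level Poisson system, let a detector with finite response time miss any excursion shorter than that time, and compare observed and true rates. This is only an illustration of the bandwidth effect, not the paper's state-transition derivation; the dead-time detector model and all rates are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

gamma_a = gamma_b = 1.0   # true rates of leaving states A and B (arbitrary units)
tau_det = 0.1             # detector response time: 10x faster than the process

n = 200_000
dwell_a = rng.exponential(1.0 / gamma_a, n)   # true dwell times in A
dwell_b = rng.exponential(1.0 / gamma_b, n)   # true dwell times in B

def observed_a_dwells(d_a, d_b, tau):
    """Crude dead-time detector: a B excursion shorter than tau is not
    resolved, so the A dwells on either side of it merge into one."""
    out, acc = [], 0.0
    for da, db in zip(d_a, d_b):
        acc += da
        if db < tau:
            acc += db          # unresolved excursion absorbed into the dwell
        else:
            out.append(acc)
            acc = 0.0
    return np.array(out)

obs = observed_a_dwells(dwell_a, dwell_b, tau_det)
print(f"true rate:     {gamma_a:.3f}")
print(f"observed rate: {1.0 / obs.mean():.3f}")   # ~10% low despite the 10x margin
```

    Even with the detector ten times faster than the process, the observed rate comes out roughly 10% below the true rate, consistent with the abstract's claim that the correction matters in this regime.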

    The Critical Coupling Likelihood Method: A new approach for seamless integration of environmental and operating conditions of gravitational wave detectors into gravitational wave searches

    Any search effort for gravitational waves (GW) using interferometric detectors like LIGO needs to be able to identify if and when noise is coupling into the detector's output signal. The Critical Coupling Likelihood (CCL) method has been developed to characterize potential noise coupling and, in the future, to aid GW search efforts. By testing two hypotheses about pairs of channels, CCL is able to distinguish undesirable coupled instrumental noise from potential GW candidates. Our preliminary results show that CCL can associate up to ~80% of observed artifacts with SNR ≥ 8 to local noise sources, while reducing the duty cycle of the instrument by ≲15%. An approach like CCL will become increasingly important as GW research moves into the Advanced LIGO era, going from the first GW detection to GW astronomy. Comment: submitted CQ
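
    The two-hypothesis test on channel pairs can be illustrated with a generic coincidence likelihood ratio: are GW-channel artifacts coincident with auxiliary-channel glitches more often than chance would allow? This sketch is not the published CCL statistic; the coincidence window, event rates, and Poisson accidental model are all assumptions.

```python
import numpy as np
from math import lgamma, log

rng = np.random.default_rng(1)

T_obs = 10_000.0   # s, observation time (assumed)
win = 0.1          # s, coincidence half-window (assumed)

aux = np.sort(rng.uniform(0, T_obs, 500))   # auxiliary-channel glitch times
gw = np.sort(rng.uniform(0, T_obs, 200))    # GW-channel artifact times

# Distance from each GW artifact to the nearest aux glitch.
i = np.searchsorted(aux, gw)
lo = np.clip(i - 1, 0, len(aux) - 1)
hi = np.clip(i, 0, len(aux) - 1)
nearest = np.minimum(np.abs(aux[lo] - gw), np.abs(aux[hi] - gw))
n_coinc = int((nearest <= win).sum())

# Expected accidental coincidences if the two channels are independent.
mu_acc = len(gw) * len(aux) * 2 * win / T_obs

def log_poisson(n, mu):
    return n * log(mu) - mu - lgamma(n + 1)

# Maximized log-likelihood ratio: coupled (rate fit to data) vs uncoupled.
llr = log_poisson(n_coinc, max(n_coinc, 1e-12)) - log_poisson(n_coinc, mu_acc)
print(f"coincidences: {n_coinc}, accidentals expected: {mu_acc:.1f}, log LR: {llr:.2f}")
```

    A large log-likelihood ratio would flag the channel pair as coupled, so artifacts coincident with its glitches could be attributed to local noise rather than treated as GW candidates.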

    Teleology and Realism in Leibniz's Philosophy of Science

    This paper argues for an interpretation of Leibniz’s claim that physics requires both mechanical and teleological principles as a view regarding the interpretation of physical theories. Granting that Leibniz’s fundamental ontology remains non-physical, or mentalistic, it argues that teleological principles nevertheless ground a realist commitment about mechanical descriptions of phenomena. The empirical results of the new sciences, according to Leibniz, have genuine truth conditions: there is a fact of the matter about the regularities observed in experience. Taking this stance, however, requires bringing non-empirical reasons to bear upon mechanical causal claims. This paper first evaluates extant interpretations of Leibniz’s thesis that there are two realms in physics as describing parallel, self-sufficient sets of laws. It then examines Leibniz’s use of teleological principles to interpret scientific results in the context of his interventions in debates in seventeenth-century kinematic theory and in the teaching of Copernicanism. Leibniz’s use of the principle of continuity and the principle of simplicity, for instance, reveals an underlying commitment to the truth-aptness, or approximate truth-aptness, of the new natural sciences. The paper concludes with a brief remark on the relation between metaphysics, theology, and physics in Leibniz.

    Imaging the Earth's Interior: the Angular Distribution of Terrestrial Neutrinos

    Decays of radionuclides throughout the Earth's interior produce geothermal heat and are also a source of antineutrinos. The (angle-integrated) geoneutrino flux places an integral constraint on the terrestrial radionuclide distribution. In this paper, we calculate the angular distribution of geoneutrinos, which opens a window on the differential radionuclide distribution. We develop the general formalism for the neutrino angular distribution, and we present the inverse transformation which recovers the terrestrial radioisotope distribution given a measurement of the neutrino angular distribution. Thus geoneutrinos not only offer a means to image the Earth's interior, but provide a direct measure of the radioactive Earth, both (1) revealing the Earth's inner structure as probed by radionuclides, and (2) allowing for a complete determination of the radioactive heat generation as a function of radius. We present the geoneutrino angular distribution for the favored Earth model which has been used to calculate the geoneutrino flux. In this model the neutrino generation is dominated by decays in the Earth's mantle and crust; this leads to a very "peripheral" angular distribution, in which 2/3 of the neutrinos come from angles > 60 degrees away from the downward vertical. We note the possibility that the Earth's core contains potassium; different geophysical predictions lead to strongly varying, and hence distinguishable, central intensities (< 30 degrees from the downward vertical). Other uncertainties in the models, and prospects for observation of the geoneutrino angular distribution, are briefly discussed. We conclude by urging the development and construction of antineutrino experiments with angular sensitivity. (Abstract abridged.) Comment: 25 pages, RevTeX, 7 figures. Comments welcome
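
    The geometry behind the "peripheral" distribution is simple to reproduce. For isotropic emitters with density n(r) in a spherically symmetric Earth, the intensity per unit solid angle at a surface detector is proportional to the line-of-sight integral of n(r) along the chord at nadir angle theta, since the 1/s² flux dilution cancels the s² volume element. The sketch below uses uniform unit density in each shell; the shell radii are approximate and real abundances differ between reservoirs by orders of magnitude.

```python
import numpy as np

# Chord integral of the source density for a surface detector:
# dPhi/dOmega is proportional to the integral of n(r) along the chord
# at nadir angle theta (measured from the downward vertical).
R = 6371.0  # km, Earth radius

def intensity(theta_deg, r_in, r_out, n_steps=20_000):
    """Chord integral of a uniform unit-density shell [r_in, r_out]."""
    theta = np.radians(theta_deg)
    s_max = 2.0 * R * np.cos(theta)              # chord length through the Earth
    s = np.linspace(0.0, s_max, n_steps)
    r = np.sqrt(R**2 + s**2 - 2.0 * R * s * np.cos(theta))
    inside = (r >= r_in) & (r <= r_out)
    return inside.sum() * (s[1] - s[0])          # km of chord inside the shell

# Illustrative shell radii in km; unit density, so only geometry is compared.
shells = {"core": (0.0, 3480.0), "mantle": (3480.0, 6336.0), "crust": (6336.0, R)}
for name, (r1, r2) in shells.items():
    row = ", ".join(f"{a:2d} deg: {intensity(a, r1, r2):6.0f} km"
                    for a in (0, 30, 60, 85))
    print(f"{name:6s} {row}")
```

    With these radii a chord intersects the core only for nadir angles below arcsin(3480/6371) ≈ 33 degrees, the geometric origin of the central-intensity window (< 30 degrees) mentioned above, while the thin outer shell dominates near the horizon.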

    Random template placement and prior information

    In signal detection problems, one is usually faced with the task of searching a parameter space for peaks in the likelihood function which indicate the presence of a signal. Random searches have proven to be very efficient as well as easy to implement, compared, e.g., to searches along regular grids in parameter space. Knowledge of the parameterised shape of the signal searched for adds structure to the parameter space: there are usually regions that need to be searched densely, while in other regions a coarser search is sufficient. On the other hand, prior information identifies the regions in which a search will actually be promising or may likely be in vain. Defining specific figures of merit allows one to combine both template metric and prior distribution and devise optimal sampling schemes over the parameter space. We show an example related to the gravitational wave signal from a binary inspiral event. Here the template metric and prior information pull in opposite directions, since signals from low-mass systems tolerate the least mismatch in parameter space, while high-mass systems are far more likely, as they imply a greater signal-to-noise ratio (SNR) and hence are detectable to greater distances. The derived sampling strategy is implemented in a Markov chain Monte Carlo (MCMC) algorithm, where it improves convergence. Comment: Proceedings of the 8th Edoardo Amaldi Conference on Gravitational Waves. 7 pages, 4 figures
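
    The competition between metric and prior can be mimicked in one dimension: let the required template density fall with mass while the detectability-weighted prior rises with it, and draw random template locations from a figure of merit that compromises between the two. The power laws and the geometric weighting below are illustrative assumptions, not the figures of merit derived in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy 1D parameter space: chirp mass m in solar masses.
m_lo, m_hi = 1.0, 20.0

def metric_density(m):
    """Required template density: low-mass (long) signals tolerate less
    mismatch, so need denser placement (assumed power law)."""
    return m ** -2.5

def prior(m):
    """Detectability-weighted prior: high-mass systems are louder and are
    seen to greater distances (assumed power law)."""
    return m ** 1.5

def figure_of_merit(m, w=0.5):
    """Geometric compromise between metric density and prior, weight w."""
    return metric_density(m) ** w * prior(m) ** (1.0 - w)

def sample(n, w=0.5):
    """Rejection-sample template locations with density ~ figure_of_merit."""
    grid = np.linspace(m_lo, m_hi, 1000)
    fmax = figure_of_merit(grid, w).max()
    out = []
    while len(out) < n:
        m = rng.uniform(m_lo, m_hi)
        if rng.uniform(0.0, fmax) < figure_of_merit(m, w):
            out.append(m)
    return np.array(out)

templates = sample(5000)
print("median template location:", round(float(np.median(templates)), 2))
```

    Raising the weight w concentrates templates at low mass (fine coverage), while lowering it pushes them toward the high-mass region favored by the prior; in an MCMC implementation the same density would serve as the proposal distribution.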

    Designing and Piloting a Tool for the Measurement of the Use of Pronunciation Learning Strategies

    To drive the field forward and to ensure that research findings are comparable across studies and provide a sound basis for feasible pedagogic proposals, what appears indispensable is to draw up a classification of pronunciation learning strategies (PLS) and to design, on that basis, a valid and reliable data collection tool which could be employed to measure the use of these strategies in different groups of learners, correlate it with individual and contextual variables, and appraise the effects of training programs. In accordance with this rationale, the present paper proposes a tentative categorization of pronunciation learning strategies, adopting as a point of reference the existing taxonomies of strategic devices (i.e. O'Malley and Chamot 1990; Oxford 1990) and the instructional options teachers have at their disposal when dealing with elements of this language subsystem (e.g. Kelly 2000; Goodwin 2001). It also introduces a research instrument designed on the basis of this classification, one that shares a number of characteristics with Oxford's (1990) Strategy Inventory for Language Learning but, in contrast to it, includes both Likert-scale and open-ended items. The findings of a pilot study which involved 80 English Department students demonstrate that although the tool requires considerable refinement, it provides a useful point of departure for future research into PLS.
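
    Since the instrument includes Likert-scale items, a standard first check when piloting such a tool is internal-consistency reliability, typically Cronbach's alpha. The sketch below computes it on synthetic data shaped like the pilot (80 respondents); the item count and the response model are invented for illustration and say nothing about the actual instrument's reliability.

```python
import numpy as np

def cronbach_alpha(scores):
    """Internal-consistency reliability of a Likert-scale instrument.
    scores: (n_respondents, n_items) array of item responses."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_var = scores.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = scores.sum(axis=1).var(ddof=1)    # variance of total scores
    return (k / (k - 1)) * (1.0 - item_var / total_var)

# Hypothetical pilot data: 80 students, 30 five-point Likert items.
rng = np.random.default_rng(3)
ability = rng.normal(0, 1, size=(80, 1))               # latent strategy use
noise = rng.normal(0, 1, size=(80, 30))                # item-level noise
items = np.clip(np.round(3 + ability + noise), 1, 5)   # correlated item scores
print(f"Cronbach's alpha: {cronbach_alpha(items):.2f}")
```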

    A geoneutrino experiment at Homestake

    A significant fraction of the 44 TW of heat dissipation from the Earth's interior is believed to originate from the decays of terrestrial uranium and thorium. The only estimates of this radiogenic heat, which is the driving force for mantle convection, come from Earth models based on meteorites, and have large systematic errors. The detection of electron antineutrinos produced by these uranium and thorium decays would allow a more direct measure of the total uranium and thorium content, and hence of radiogenic heat production, in the Earth. We discuss the prospect of building an electron antineutrino detector approximately 700 m^3 in size in the Homestake mine at the 4850' level. This would allow us to make a measurement of the total uranium and thorium content with a statistical error smaller than the systematic error from our current knowledge of neutrino oscillation parameters. It would also allow us to test the hypothesis of a naturally occurring nuclear reactor at the center of the Earth. Comment: proceedings for Neutrino Sciences 2005, submitted to Earth, Moon, and Planets
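
    The scaling behind the statistical-error claim is straightforward: the relative statistical error on a counting measurement is 1/sqrt(N), so it shrinks with exposure while the oscillation-parameter systematic stays fixed. A back-of-the-envelope sketch follows; the scintillator properties and the geoneutrino event rate are assumed round numbers, not the proposal's values.

```python
import math

# All inputs are assumed round numbers for illustration, not the proposal's.
volume_m3 = 700.0        # detector volume from the abstract
density = 850.0          # kg/m^3, typical liquid scintillator (assumed)
h_mass_fraction = 0.12   # hydrogen mass fraction of scintillator (assumed)
m_p = 1.67e-27           # kg, proton mass

free_protons = volume_m3 * density * h_mass_fraction / m_p
print(f"free-proton targets: {free_protons:.2e}")

# Assumed geoneutrino event rate, in events per 10^32 free protons per year.
rate_per_1e32_yr = 30.0
events_per_year = rate_per_1e32_yr * free_protons / 1e32

for years in (1, 3, 10):
    n = events_per_year * years
    print(f"{years:2d} yr: N = {n:5.0f} events, 1/sqrt(N) = {100.0 / math.sqrt(n):.1f}%")
```

    Under these assumptions the detector holds a few times 10^31 free protons and collects tens of events per year, so the 1/sqrt(N) error falls to the few-percent level only with a multi-year exposure; the actual rate at Homestake depends on the local crust and on oscillation parameters.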

    Geo-neutrinos: A systematic approach to uncertainties and correlations

    Geo-neutrinos emitted by heat-producing elements (U, Th and K) represent a unique probe of the Earth's interior. The characterization of their fluxes is subject, however, to rather large and highly correlated uncertainties. The geochemical covariance of the U, Th and K abundances in various Earth reservoirs induces positive correlations among the associated geo-neutrino fluxes, and between these and the radiogenic heat. Mass-balance constraints in the Bulk Silicate Earth (BSE) tend instead to anti-correlate the radiogenic element abundances in complementary reservoirs. Experimental geo-neutrino observables may be further (anti)correlated by instrumental effects. In this context, we propose a systematic approach to covariance matrices, based on the fact that all the relevant geo-neutrino observables and constraints can be expressed as linear functions of the U, Th and K abundances in the Earth's reservoirs (with relatively well-known coefficients). We briefly discuss here the construction of a tentative "geo-neutrino source model" (GNSM) for the U, Th, and K abundances in the main Earth reservoirs, based on selected geophysical and geochemical data and models (when available), on plausible hypotheses (when possible), and admittedly on arbitrary assumptions (when unavoidable). We then use the GNSM to make predictions about several experiments ("forward approach"), and to show how future data can constrain, a posteriori, the error matrix of the model itself ("backward approach"). The method may provide a useful statistical framework for evaluating the impact and the global consistency of prospective geo-neutrino measurements and Earth models. Comment: 17 pages, including 4 figures. To appear in "Earth, Moon, and Planets," Special Issue on "Neutrino Geophysics," Proceedings of Neutrino Science 2005 (Honolulu, Hawaii, Dec. 2005)
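
    The linear-algebra core of this approach is ordinary error propagation: if the observables are linear in the abundances, y = A x, the covariance propagates as C_y = A C_x A^T, and any (anti)correlations in C_x show up directly in the predicted observables. A minimal sketch with placeholder numbers (not the GNSM values):

```python
import numpy as np

# Observables y (e.g. geo-neutrino event rates, radiogenic heat) as linear
# functions y = A x of the U, Th, K abundances x in a set of reservoirs,
# with covariance C_x propagated as C_y = A C_x A^T.

x = np.array([20.0, 80.0, 240.0])        # abundances in three reservoirs (placeholder)
C_x = np.array([[4.0,  1.0,  0.0],
                [1.0,  9.0, -2.0],       # off-diagonal terms encode the
                [0.0, -2.0, 25.0]])      # (anti)correlations discussed above

A = np.array([[1.2, 0.3, 0.05],          # response coefficients of two
              [0.4, 0.9, 0.30]])         # hypothetical observables

y = A @ x
C_y = A @ C_x @ A.T
sigma = np.sqrt(np.diag(C_y))
corr = C_y[0, 1] / (sigma[0] * sigma[1])
print("observables:        ", y)
print("1-sigma errors:     ", np.round(sigma, 2))
print("induced correlation:", round(float(corr), 3))
```

    The "backward approach" the abstract mentions would run the same machinery in reverse, using measured y with its errors to update C_x by standard least-squares conditioning.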